Results 1 - 9 of 9
1.
Bioengineering (Basel) ; 11(3)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38534568

ABSTRACT

Segmenting and classifying nuclei in H&E histopathology images is often limited by the long-tailed distribution of nuclei types. However, the strong generalization ability of image segmentation foundation models like the Segment Anything Model (SAM) can help improve the detection quality of rare nuclei types. In this work, we introduce category descriptors to perform nuclei segmentation and classification by prompting the SAM model. We close the domain gap between histopathology and natural scene images by aligning features in low-level space while preserving the high-level representations of SAM. We performed extensive experiments on the Lizard dataset, validating the ability of our model to perform automatic nuclei segmentation and classification, especially for rare nuclei types, where we achieved a significant detection improvement of up to 12% in F1 score. Our model also maintains compatibility with manual point prompts for interactive refinement during inference without requiring any additional training.
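As a toy illustration of the category-descriptor idea (the descriptor learning, SAM prompting, and feature alignment are all omitted; the function and data below are hypothetical, not the paper's implementation), a segmented nucleus can be assigned to the most similar learned descriptor:

```python
import numpy as np

# Hypothetical sketch: classify a segmented nucleus by comparing its pooled
# embedding against learned per-category descriptor vectors.
def classify_by_descriptors(nucleus_feat, descriptors):
    """nucleus_feat: (D,) pooled feature of one segmented nucleus.
    descriptors: (C, D) one learned descriptor per nucleus category.
    Returns the index of the most similar category by cosine similarity."""
    f = nucleus_feat / (np.linalg.norm(nucleus_feat) + 1e-8)
    d = descriptors / (np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-8)
    scores = d @ f                         # (C,) cosine similarities
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5, 16))              # 5 nucleus types, 16-dim features
feat = descriptors[2] + 0.01 * rng.normal(size=16)  # a feature near category 2
print(classify_by_descriptors(feat, descriptors))   # -> 2
```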

2.
Comput Biol Med ; 170: 108015, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38266467

ABSTRACT

Nuclei segmentation plays a crucial role in disease understanding and diagnosis. In whole slide images, cell nuclei often appear overlapping and densely packed with ambiguous boundaries due to the underlying 3D structure of histopathology samples. Instance segmentation via deep neural networks with object clustering is able to detect individual segments in crowded nuclei but suffers from a limited field of view, and does not support amodal segmentation. In this work, we introduce a dense feature pyramid network with a feature mixing module to increase the field of view of the segmentation model while keeping pixel-level details. We also improve the model output quality by adding a multi-scale self-attention guided refinement module that sequentially adjusts predictions as resolution increases. Finally, we enable clusters to share pixels by separating the instance clustering objective function from other pixel-related tasks, and introduce supervision to occluded areas to guide the learning process. For evaluation of amodal nuclear segmentation, we also update prior metrics used in common modal segmentation to allow the evaluation of overlapping masks and mitigate over-penalization issues via a novel unique matching algorithm. Our experiments demonstrate consistent performance across multiple datasets with significantly improved segmentation quality.
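The unique matching idea for evaluating overlapping (amodal) masks can be sketched as greedy one-to-one matching by IoU, so that a ground-truth nucleus is never credited or penalized twice; this is a simplification under stated assumptions, not the paper's exact algorithm:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def unique_match(preds, gts, thr=0.5):
    """preds, gts: lists of boolean masks. Greedily pairs each ground truth
    with at most one prediction, highest IoU first, above a threshold."""
    pairs = sorted(((iou(p, g), i, j) for i, p in enumerate(preds)
                    for j, g in enumerate(gts)), reverse=True)
    used_p, used_g, matches = set(), set(), []
    for score, i, j in pairs:
        if score >= thr and i not in used_p and j not in used_g:
            used_p.add(i); used_g.add(j); matches.append((i, j))
    return matches

a = np.zeros((4, 4), bool); a[:2, :2] = True
b = np.zeros((4, 4), bool); b[1:3, 1:3] = True   # overlaps a by one pixel
matches = unique_match([a, b], [a])
print(matches)   # a matches itself exactly; b stays unmatched -> [(0, 0)]
```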


Asunto(s)
Algoritmos , Benchmarking , Núcleo Celular , Análisis por Conglomerados , Aprendizaje , Procesamiento de Imagen Asistido por Computador
3.
Neural Netw ; 166: 722-737, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37607423

ABSTRACT

Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information is still challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
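The spirit of the two losses can be illustrated with simple feature statistics: channel co-occurrence (a Gram matrix) captures texture while ignoring spatial layout, and a spatial self-similarity matrix captures layout while ignoring channel statistics. The exact formulations in the paper differ, so the functions below are only a sketch:

```python
import numpy as np

def gram(feat):
    """feat: (C, H, W) feature map -> (C, C) channel co-occurrence statistics."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def self_similarity(feat):
    """feat: (C, H, W) -> (HW, HW) cosine similarity between spatial positions."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return f.T @ f

def texture_loss(gen, tgt):      # pull texture toward the target image
    return np.mean((gram(gen) - gram(tgt)) ** 2)

def content_loss(gen, src):      # preserve spatial structure of the source
    return np.mean((self_similarity(gen) - self_similarity(src)) ** 2)

x = np.random.default_rng(1).normal(size=(8, 4, 4))
print(texture_loss(x, x), content_loss(x, x))   # 0.0 0.0 for identical inputs
```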

4.
Article in English | MEDLINE | ID: mdl-37379192

ABSTRACT

Recently, motor imagery (MI) electroencephalography (EEG) classification techniques using deep learning have shown improved performance over conventional techniques. However, improving the classification accuracy on unseen subjects is still challenging due to intersubject variability, scarcity of labeled unseen subject data, and low signal-to-noise ratio (SNR). In this context, we propose a novel two-way few-shot network able to efficiently learn how to learn representative features of unseen subject categories and classify them with limited MI EEG data. The pipeline includes an embedding module that learns feature representations from a set of signals, a temporal-attention module to emphasize important temporal features, an aggregation-attention module for key support signal discovery, and a relation module for final classification based on relation scores between a support set and a query signal. In addition to the unified learning of feature similarity and a few-shot classifier, our method can emphasize informative features in support data relevant to the query, which generalizes better on unseen subjects. Furthermore, we propose to fine-tune the model before testing by arbitrarily sampling a query signal from the provided support set to adapt to the distribution of the unseen subject. We evaluate our proposed method with three different embedding modules on cross-subject and cross-dataset classification tasks using brain-computer interface (BCI) competition IV 2a, 2b, and GIST datasets. Extensive experiments show that our model significantly improves over the baselines and outperforms existing few-shot approaches.
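A heavily simplified sketch of the relation step (illustrative only; the paper's embedding, temporal-attention, and relation modules are learned networks, whereas here similarity is plain cosine and the toy one-hot "embeddings" are assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Support embeddings of each class are attention-pooled by relevance to the
# query, and the class with the highest relation (similarity) score wins.
def classify_query(query, support, labels):
    """query: (D,); support: (N, D) embeddings; labels: (N,) class ids."""
    scores = {}
    for c in set(labels):
        s = support[np.asarray(labels) == c]   # (n_c, D) class-c support signals
        attn = softmax(s @ query)              # emphasize query-relevant support
        proto = attn @ s                       # attention-pooled class prototype
        scores[c] = query @ proto / (np.linalg.norm(query)
                                     * np.linalg.norm(proto) + 1e-8)
    return max(scores, key=scores.get)

support = np.eye(8)[:6]       # toy one-hot "embeddings" of 6 support signals
labels = [0, 0, 0, 1, 1, 1]
query = np.eye(8)[4]          # identical to one class-1 support signal
print(classify_query(query, support, labels))   # -> 1
```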

5.
Med Image Comput Comput Assist Interv ; 14221: 521-531, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38204983

ABSTRACT

One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Notably, as feature distributions in medical data are less discriminative than those of natural images, robust global model training with FL is non-trivial and can lead to overfitting. To address this issue, we propose a novel one-shot FL framework leveraging Image Synthesis and Client model Adaptation (FedISCA) with knowledge distillation (KD). To prevent overfitting, we generate diverse synthetic images ranging from random noise to realistic images. This approach (i) alleviates data privacy concerns and (ii) facilitates robust global model training using KD with decentralized client models. To mitigate domain disparity in the early stages of synthesis, we design noise-adapted client models in which batch normalization statistics on random noise (synthetic images) are updated to enhance KD. Lastly, the global model is trained with both the original and noise-adapted client models via KD and synthetic images. This process is repeated until the global model converges. Extensive evaluation of this design on five small- and three large-scale medical image classification datasets reveals superior accuracy over prior methods. Code is available at https://github.com/myeongkyunkang/FedISCA.
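A minimal sketch of the distillation step alone, under the assumption that client and student logits on a batch of synthetic images are already available (the image synthesis and batch-norm adaptation that precede this step are omitted):

```python
import numpy as np

def softmax(z, t=1.0):
    e = np.exp(z / t - np.max(z / t, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, client_logits_list, t=2.0):
    """KL divergence from the averaged client (teacher) distribution to the
    student distribution at temperature t, averaged over the batch."""
    teacher = np.mean([softmax(l, t) for l in client_logits_list], axis=0)
    student = softmax(student_logits, t)
    return float(np.mean(np.sum(teacher * (np.log(teacher + 1e-8)
                                           - np.log(student + 1e-8)), axis=-1)))

rng = np.random.default_rng(3)
clients = [rng.normal(size=(4, 10)) for _ in range(3)]  # 3 client models, batch of 4
print(kd_loss(clients[0], [clients[0]]))  # student matching its only teacher -> 0.0
```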

6.
Med Image Anal ; 80: 102482, 2022 08.
Article in English | MEDLINE | ID: mdl-35688048

ABSTRACT

In digital pathology, segmentation is a fundamental task for the diagnosis and treatment of diseases. Existing fully supervised methods often require accurate pixel-level annotations that are both time-consuming and laborious to generate. Typical approaches first pre-process histology images into patches to meet memory constraints and later perform stitching for segmentation, at times leading to lower performance given the lack of global context. Since image-level labels are cheaper to acquire, weakly supervised learning is a more practical alternative for training segmentation algorithms. In this work, we present a weakly supervised framework for histopathology segmentation using only image-level labels by refining class activation maps (CAM) with self-supervision. First, we compress gigapixel histology images with an unsupervised contrastive learning technique to retain high-level spatial context. Second, a network is trained on the compressed images to jointly predict image labels and refine the initial CAMs via self-supervised losses. In particular, we achieve refinement via a pixel correlation module (PCM) that leverages self-attention between the initial CAM and the input to encourage fine-grained activations. We also introduce a feature masking technique that performs spatial dropout on the compressed input to suppress low-confidence predictions. To effectively train our model, we propose a loss function that includes a classification objective with image labels, self-supervised regularization, and entropy minimization between the CAM predictions. Experimental results on two curated datasets show that our approach is comparable to fully supervised methods and can outperform existing state-of-the-art patch-based methods. https://github.com/PhilipChicco/wsshisto.
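The feature-masking idea can be sketched as spatial dropout on a compressed feature map: entire spatial positions are zeroed across all channels, rather than individual values. The function below is an illustration under that assumption, not the repository's implementation:

```python
import numpy as np

def feature_mask(feat, drop_prob=0.3, rng=None):
    """feat: (C, H, W). Drops each spatial location with probability drop_prob,
    zeroing it across all channels, with inverted-dropout rescaling."""
    if rng is None:
        rng = np.random.default_rng()
    c, h, w = feat.shape
    keep = (rng.random((1, h, w)) >= drop_prob).astype(feat.dtype)
    return feat * keep / (1.0 - drop_prob)

x = np.ones((4, 8, 8))
y = feature_mask(x, 0.5, np.random.default_rng(0))
# every spatial position is either fully zeroed or fully rescaled (to 2.0)
# across all 4 channels, so channel maps share an identical zero pattern
```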


Asunto(s)
Algoritmos , Aprendizaje Automático Supervisado , Humanos
7.
Med Image Anal ; 72: 102105, 2021 08.
Article in English | MEDLINE | ID: mdl-34102477

ABSTRACT

Chest computed tomography (CT) based analysis and diagnosis of the Coronavirus Disease 2019 (COVID-19) plays a key role in combating the outbreak of the pandemic that has rapidly spread worldwide. To date, the disease has infected more than 18 million people, with over 690k deaths reported. Reverse transcription polymerase chain reaction (RT-PCR) is the current gold standard for clinical diagnosis but may produce false positives; thus, chest CT based diagnosis is considered more viable. However, accurate screening is challenging due to the difficulty in annotation of infected areas, curation of large datasets, and the slight discrepancies between COVID-19 and other viral pneumonia. In this study, we propose an attention-based end-to-end weakly supervised framework for the rapid diagnosis of COVID-19 and bacterial pneumonia based on multiple instance learning (MIL). We further incorporate unsupervised contrastive learning for improved accuracy, with attention applied in both spatial and latent contexts; we term this Dual Attention Contrastive-based MIL (DA-CMIL). DA-CMIL takes several patient CT slices (considered as a bag of instances) as input and outputs a single label. Attention-based pooling is applied to implicitly select key slices in the latent space, whereas spatial attention learns slice spatial context for interpretable diagnosis. A contrastive loss is applied at the instance level to encode the similarity of features from the same patient against representative pooled patient features. Empirical results show that our algorithm achieves an overall accuracy of 98.6% and an AUC of 98.4%. Moreover, ablation studies show the benefit of contrastive learning with MIL.
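The attention-based pooling over slices can be sketched in a few lines (a generic attention-MIL formulation with assumed random parameters `w`, `v`; the paper's spatial attention and contrastive loss are omitted):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_mil_pool(instances, w, v):
    """instances: (N, D) slice features; w: (D, K), v: (K,) attention params.
    Returns the attention-pooled bag feature and the per-slice weights."""
    scores = np.tanh(instances @ w) @ v     # (N,) unnormalized attention
    alpha = softmax(scores)                 # implicit key-slice selection
    return alpha @ instances, alpha         # bag feature (D,), weights (N,)

rng = np.random.default_rng(4)
slices = rng.normal(size=(5, 16))           # 5 CT slices from one patient
w, v = rng.normal(size=(16, 8)), rng.normal(size=8)
bag, alpha = attention_mil_pool(slices, w, v)
print(bag.shape, round(alpha.sum(), 6))     # (16,) 1.0
```

The pooled bag feature then feeds a single patient-level classifier, which is what makes the framework weakly supervised: only the patient label is needed, never per-slice annotations.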


Subject(s)
COVID-19 , Pneumonia, Viral , Humans , Pandemics , SARS-CoV-2 , Tomography, X-Ray Computed
8.
J Korean Med Sci ; 36(5): e46, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33527788

ABSTRACT

BACKGROUND: It is difficult to distinguish the subtle differences shown in computed tomography (CT) images of coronavirus disease 2019 (COVID-19) and bacterial pneumonia patients, which often leads to inaccurate diagnosis. It is therefore desirable to design and evaluate interpretable feature extraction techniques to describe the patient's condition. METHODS: This is a retrospective cohort study of 170 confirmed patients with COVID-19 or bacterial pneumonia acquired at Yeungnam University Hospital in Daegu, Korea. The lung and lesion regions were segmented to crop the lesions into 2D patches used to train a classifier model that could differentiate between COVID-19 and bacterial pneumonia. The K-means algorithm was used to cluster deep features extracted by the trained model into 20 groups. Each lesion patch cluster was described by a characteristic imaging term for comparison. For each CT image containing multiple lesions, a histogram of lesion types was constructed using the cluster information. Finally, a support vector machine classifier was trained with the histogram and radiomics features to distinguish diseases and severity. RESULTS: The 20 clusters constructed from 170 patients were reviewed based on common radiographic appearance types. Two clusters showed typical findings of COVID-19, and two other clusters showed typical findings related to bacterial pneumonia. Notably, one cluster showed bilateral diffuse ground-glass opacities (GGOs) in the central and peripheral lungs and was considered a key factor for severity classification. The proposed method achieved an accuracy of 91.2% for classifying COVID-19 and bacterial pneumonia patients, and 95% for severity classification. The CT quantitative parameters represented by the values of cluster 8 were correlated with existing laboratory data and clinical parameters.
CONCLUSION: Deep chest CT analysis with the constructed lesion clusters revealed well-known COVID-19 CT manifestations comparable to manual CT analysis. The constructed histogram features improved accuracy for both disease and severity classification and showed correlations with laboratory data and clinical parameters. These histogram features can provide guidance for improved analysis and treatment of COVID-19.
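The histogram construction described in METHODS can be sketched as follows, assuming the cluster assignment of each lesion patch is already known (in the paper it comes from K-means over deep lesion-patch features, K=20):

```python
import numpy as np

def lesion_histogram(cluster_ids, n_clusters=20):
    """cluster_ids: cluster index of each lesion patch in one CT scan.
    Returns a normalized histogram over lesion types, usable alongside
    radiomics features as input to an SVM classifier."""
    hist = np.bincount(cluster_ids, minlength=n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

ids = np.array([3, 3, 8, 8, 8, 12])   # hypothetical: 6 lesion patches in one scan
h = lesion_histogram(ids)
print(h[8])                           # half the patches fall in cluster 8 -> 0.5
```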


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Pneumonia, Bacterial/diagnostic imaging , Respiratory Distress Syndrome/diagnostic imaging , Tomography, X-Ray Computed , Adult , Aged , Algorithms , Artificial Intelligence , Cluster Analysis , Deep Learning , Female , Humans , Male , Middle Aged , Pattern Recognition, Automated , Reproducibility of Results , Republic of Korea/epidemiology , Respiratory Distress Syndrome/complications , Retrospective Studies , Severity of Illness Index , Support Vector Machine
9.
IEEE Access ; 7: 18382-18391, 2019.
Article in English | MEDLINE | ID: mdl-30956927

ABSTRACT

Perivascular spaces (PVS) in the human brain are related to various brain diseases. However, they are difficult to quantify due to their thin and blurry appearance. In this paper, we introduce a deep-learning-based method that can enhance a magnetic resonance (MR) image to better visualize the PVS. To accurately predict the enhanced image, we propose a very deep 3D convolutional neural network that contains densely connected networks with skip connections. The proposed network can utilize rich contextual information derived from low-level to high-level features and effectively alleviate the gradient vanishing problem caused by the deep layers. The proposed method is evaluated on 17 7T MR images using twofold cross-validation. The experiments show that our network is much more effective at enhancing the PVS than previous PVS enhancement methods.
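The dense connectivity pattern can be sketched conceptually (plain matrix products stand in for the paper's 3D convolutions, and the dimensions below are arbitrary toy choices): each layer receives the concatenation of all earlier outputs, which shortens gradient paths and mixes low- and high-level features.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dense_block(x, weights):
    """x: (B, D0) input features; weights[i]: (sum of all previous dims, growth).
    Every layer sees the concatenation of the input and all earlier outputs."""
    feats = [x]
    for w in weights:
        inp = np.concatenate(feats, axis=-1)   # dense (skip) connections
        feats.append(relu(inp @ w))
    return np.concatenate(feats, axis=-1)      # all features exposed to the next stage

rng = np.random.default_rng(5)
x = rng.normal(size=(1, 8))                    # one toy 8-dim feature vector
weights = [rng.normal(size=(8, 4)),            # growth rate 4 per layer
           rng.normal(size=(12, 4)),
           rng.normal(size=(16, 4))]
out = dense_block(x, weights)
print(out.shape)                               # (1, 20): 8 + 3 layers x growth 4
```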
